Utilizing Image Transforms and Diffusion Models for Generative Modeling of Short and Long Time Series

Neural Information Processing Systems

Recently, there has been a surge of interest in generative modeling of time series data. Most existing approaches are designed either to process short sequences or to handle long-range sequences. This dichotomy can be attributed to gradient issues with recurrent networks, the computational cost of transformers, and the limited expressiveness of state space models. Towards a unified generative model for varying-length time series, we propose in this work to transform sequences into images.
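As a rough illustration of the sequence-to-image idea, the sketch below uses a sliding-window delay embedding, one common way to turn a 1-D series into a 2-D array that an image diffusion model could consume. The function name and parameters are illustrative assumptions; the paper's actual transform may differ.

```python
import numpy as np

def delay_embedding(series, n, skip=1):
    """Stack overlapping windows of length n into a 2-D 'image'.

    A generic sequence-to-image transform (sliding-window delay
    embedding); illustrative only, not necessarily the paper's choice.
    """
    rows = [series[i:i + n] for i in range(0, len(series) - n + 1, skip)]
    return np.stack(rows)

# A 64-sample sine wave becomes a 13 x 16 "image".
x = np.sin(np.linspace(0, 4 * np.pi, 64))
img = delay_embedding(x, n=16, skip=4)
print(img.shape)  # (13, 16)
```

Once series of different lengths are mapped to fixed-size images (e.g. by choosing `n` and `skip` per length, or resizing), a single image diffusion model can be trained across them.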

Generative Modeling by Estimating Gradients of the Data Distribution

Yang Song, Stefano Ermon

Neural Information Processing Systems

Generative models have many applications in machine learning. To list a few, they have been used to generate high-fidelity images [26, 6], synthesize realistic speech and music fragments [58], improve the performance of semi-supervised learning [28, 10], detect adversarial examples and other anomalous data [54], perform imitation learning [22], and explore promising states in reinforcement learning [41].
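To illustrate what estimating the gradient of the (log) data distribution buys you, the sketch below samples with unadjusted Langevin dynamics given a known score function; for a standard normal the score is simply -x. The function name, step size, and iteration count are illustrative assumptions; in practice the score is learned by score matching rather than given in closed form.

```python
import numpy as np

def langevin_sample(score_fn, x0, step=0.1, n_steps=500, rng=None):
    """Unadjusted Langevin dynamics:
    x_{t+1} = x_t + (step / 2) * score(x_t) + sqrt(step) * noise.
    Draws an approximate sample from the density whose score is score_fn.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + 0.5 * step * score_fn(x) + np.sqrt(step) * noise
    return x

# Toy check: with score(x) = -x (standard normal), chains started far
# from the mode drift back and settle near N(0, 1).
rng = np.random.default_rng(0)
samples = np.array([
    langevin_sample(lambda x: -x, [5.0], rng=rng) for _ in range(200)
])
print(samples.mean(), samples.var())
```

The same loop, with a learned neural score network in place of `lambda x: -x`, is the basic sampler behind score-based generative models.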